
    Clifford Algebras Meet Tree Decompositions

    We introduce the Non-commutative Subset Convolution - a convolution of functions useful when working with determinant-based algorithms. In order to compute it efficiently, we take advantage of Clifford algebras, a generalization of quaternions used mainly in quantum field theory. We apply this tool to speed up algorithms counting subgraphs parameterized by the treewidth of a graph. We present an O^*((2^omega + 1)^{tw})-time algorithm for counting Steiner trees and an O^*((2^omega + 2)^{tw})-time algorithm for counting Hamiltonian cycles, both of which improve the previously known upper bounds. The result for Steiner Tree also translates into a deterministic algorithm for Feedback Vertex Set. All of these constitute the best known running times of deterministic algorithms for the decision versions of these problems, and they match the best running times obtained for the pathwidth parameterization under the assumption omega = 2.
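
    To make the building block concrete, the sketch below shows the classical (commutative) subset convolution (f * g)(S) = sum over T subset of S of f(T) g(S \ T), computed in O(2^n n^2) arithmetic operations via ranked zeta and Moebius transforms. This is the standard construction that the paper's non-commutative variant generalizes; it is not the Clifford-algebra machinery itself, and the function names are illustrative.

        # Classical fast subset convolution over an n-element ground set.
        # f and g are lists of length 2^n indexed by bitmask; this is the
        # commutative baseline, not the paper's non-commutative variant.
        def subset_convolution(f, g, n):
            size = 1 << n
            popcount = [bin(m).count("1") for m in range(size)]

            # Ranked zeta transform: H[r][S] = sum of h(T) over T subset of S with |T| = r.
            def ranked_zeta(h):
                H = [[h[m] if popcount[m] == r else 0 for m in range(size)]
                     for r in range(n + 1)]
                for r in range(n + 1):
                    for i in range(n):
                        bit = 1 << i
                        for m in range(size):
                            if m & bit:
                                H[r][m] += H[r][m ^ bit]
                return H

            F, G = ranked_zeta(f), ranked_zeta(g)
            result = [0] * size
            for r in range(n + 1):
                # Multiply ranks summing to r pointwise, then invert (Moebius transform).
                C = [sum(F[a][m] * G[r - a][m] for a in range(r + 1)) for m in range(size)]
                for i in range(n):
                    bit = 1 << i
                    for m in range(size):
                        if m & bit:
                            C[m] -= C[m ^ bit]
                for m in range(size):
                    if popcount[m] == r:
                        result[m] = C[m]
            return result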

    Constant-Factor FPT Approximation for Capacitated k-Median

    Capacitated k-median is one of the few outstanding optimization problems for which the existence of a polynomial-time constant-factor approximation algorithm remains an open problem. In a series of recent papers, algorithms producing solutions that violate either the number of facilities or the capacities by a multiplicative factor were obtained. However, producing solutions without violations appears to be hard and potentially requires different algorithmic techniques. Notably, when parameterized by the number of facilities k, the problem is also W[2]-hard, making the existence of an exact FPT algorithm unlikely. In this work we provide an FPT-time constant-factor approximation algorithm preserving both the cardinality and the capacities of the facilities. The algorithm runs in time 2^O(k log k) n^O(1) and achieves an approximation ratio of 7 + epsilon.
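
    As a minimal sketch (with illustrative names, not taken from the paper), the check below makes the "no violations" requirement concrete: a feasible solution opens at most k facilities, assigns no facility more clients than its capacity, and is scored by its total connection cost.

        # Verify that an assignment is violation-free for capacitated k-median
        # (at most k open facilities, no facility over capacity) and return its
        # connection cost; returns None if a constraint is violated.
        # All names here are illustrative, not from the paper.
        from collections import Counter

        def capacitated_k_median_cost(assignment, capacity, dist, k):
            """assignment: client -> facility; capacity: facility -> int;
            dist: (client, facility) -> distance; k: facility budget."""
            opened = set(assignment.values())
            if len(opened) > k:
                return None                      # cardinality violated
            load = Counter(assignment.values())
            if any(load[f] > capacity[f] for f in opened):
                return None                      # capacity violated
            return sum(dist[c, f] for c, f in assignment.items())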

    On Problems Equivalent to (min,+)-Convolution

    In recent years, significant progress has been made in explaining the apparent hardness of improving over naive solutions for many fundamental polynomially solvable problems. This came in the form of conditional lower bounds -- reductions from a problem assumed to be hard, such as 3SUM, All-Pairs Shortest Paths, SAT, or Orthogonal Vectors. In the (min,+)-convolution problem, the goal is to compute a sequence c, where c[k] = min_i a[i] + b[k-i], given sequences a and b. This can easily be done in O(n^2) time, but no O(n^{2-eps}) algorithm is known for any eps > 0. In this paper we undertake a systematic study of the (min,+)-convolution problem as a hardness assumption. As a first step, we establish the equivalence of this problem to a group of other problems, including variants of the classic knapsack problem and problems related to subadditive sequences. (min,+)-convolution has also been used as a building block in algorithms for many problems, notably problems in stringology, and it has already appeared as an ad hoc hardness assumption. We investigate some of these connections and provide new reductions and other results.
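
    For reference, here is the naive quadratic algorithm from the definition above, written out as a short sketch; the hardness assumption studied in the paper is that no O(n^{2-eps}) algorithm exists.

        # Naive O(n^2) (min,+)-convolution: c[k] = min over i of a[i] + b[k-i].
        # The conditional lower bounds discussed in the paper assume this cannot
        # be done in O(n^{2-eps}) time for any eps > 0.
        def min_plus_convolution(a, b):
            c = [float("inf")] * (len(a) + len(b) - 1)
            for i, ai in enumerate(a):
                for j, bj in enumerate(b):
                    c[i + j] = min(c[i + j], ai + bj)
            return c

        # Example: min_plus_convolution([0, 2, 5], [0, 1, 4]) == [0, 1, 3, 6, 9]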

    When the Optimum is also Blind: a New Perspective on Universal Optimization

    Consider the following variant of the set cover problem. We are given a universe U = {1,...,n} and a collection of subsets C = {S_1,...,S_m}, where each S_i is a subset of U. For every element u in U we need to choose a set phi(u) from the collection C such that u belongs to phi(u). Once we construct and fix the mapping phi from U to C, a subset X of the universe U is revealed, and we must cover all elements of X with exactly phi(X), that is, {phi(u) : u in X}. The goal is to find a mapping such that the cover phi(X) is as cheap as possible. This is an example of a universal problem, where the solution has to be created before the actual instance is revealed. Such problems appear naturally when we need to optimize under uncertainty and it may be too expensive to start computing a solution only once the input begins to be revealed. A rich body of work has been devoted to investigating such problems under worst-case analysis, i.e., measuring how good a solution is by the worst-case ratio of the universal solution on a given instance to the optimum solution for the same instance. As the universal solution is significantly more constrained, such a worst-case ratio is typically quite large. One way to obtain a viewpoint less vulnerable to such extreme worst cases is to assume that the instance for which we will have to produce a solution is drawn from some probability distribution. In this case one wants to minimize the expected value of the ratio of the universal solution to the optimum solution. The bounds obtained here are indeed smaller than in the worst-case model, but we still compare apples to oranges, as no universal solution can construct the optimum solution for every possible instance. What if we instead compared our approximate universal solution against an optimal universal solution that obeys the same rules as we do? We show that under this viewpoint, still in the stochastic variant, we can indeed obtain better bounds than in the expected-ratio model. For example, for the set cover problem we obtain an H_n-approximation, which matches the approximation ratio of the classic deterministic setting. Moreover, we show this for all probability distributions over U that have a polynomially large carrier, while all previous results pertained to a model in which elements were sampled independently. Our result is based on rounding a proper configuration IP that captures the optimal universal solution, together with tools from submodular optimization. The same basic approach leads to improved approximation algorithms for other related problems, including Vertex Cover, Edge Cover, Directed Steiner Tree, Multicut, and Facility Location.
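
    A minimal sketch (illustrative names, not the paper's algorithm) of how a fixed universal mapping phi is evaluated once the realized subset X is revealed: the cover is forced to be exactly {phi(u) : u in X}, and its cost is the total weight of the distinct sets used. The stochastic model then compares the expectation of this cost against the best universal mapping rather than against the per-instance optimum.

        # Evaluate a fixed universal mapping phi on a revealed subset X.
        # Names are illustrative; this only demonstrates the problem setup.
        def universal_cover_cost(phi, weight, X):
            """phi: element -> set id (fixed in advance); weight: set id -> cost;
            X: realized subset of the universe."""
            used = {phi[u] for u in X}          # sets forced by the mapping
            return sum(weight[s] for s in used)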

    5-Approximation for H-Treewidth Essentially as Fast as H-Deletion Parameterized by Solution Size

    The notion of H-treewidth, where H is a hereditary graph class, was recently introduced as a generalization of the treewidth of an undirected graph. Roughly speaking, a graph of H-treewidth at most k can be decomposed into (arbitrarily large) H-subgraphs which interact only through vertex sets of size O(k) that can be organized in a tree-like fashion. H-treewidth can be used as a hybrid parameterization to develop fixed-parameter tractable algorithms for H-deletion problems, which ask to find a minimum vertex set whose removal from a given graph G turns it into a member of H. The bottleneck in the current parameterized algorithms lies in the computation of suitable tree H-decompositions. We present FPT approximation algorithms to compute tree H-decompositions for hereditary and union-closed graph classes H. Given a graph of H-treewidth k, we can compute a 5-approximate tree H-decomposition in time f(O(k)) n^O(1) whenever H-deletion parameterized by solution size can be solved in time f(k) n^O(1) for some function f(k) >= 2^k. The current-best algorithms either achieve an approximation factor of k^O(1) or construct optimal decompositions while suffering from non-uniformity with unknown parameter dependence. Using these decompositions, we obtain algorithms solving Odd Cycle Transversal in time 2^O(k) n^O(1) parameterized by bipartite-treewidth and Vertex Planarization in time 2^O(k log k) n^O(1) parameterized by planar-treewidth, showing that these can be as fast as the solution-size parameterizations and giving the first ETH-tight algorithms for parameterizations by hybrid width measures. Comment: Conference version to appear at the European Symposium on Algorithms (ESA 2023).
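
    To make the H-deletion notion concrete for H = bipartite (i.e. Odd Cycle Transversal), the sketch below checks whether deleting a vertex set S from a graph leaves a bipartite graph, via BFS 2-coloring. It only illustrates the target condition; it is not the paper's 5-approximation for tree H-decompositions, and the names are illustrative.

        # H-deletion check for H = bipartite (Odd Cycle Transversal): does
        # removing the vertex set S from the graph leave a bipartite graph?
        # Uses BFS 2-coloring; illustrates the target condition only.
        from collections import deque

        def bipartite_after_deletion(adj, S):
            """adj: vertex -> iterable of neighbours; S: set of deleted vertices."""
            color = {}
            for start in adj:
                if start in S or start in color:
                    continue
                color[start] = 0
                queue = deque([start])
                while queue:
                    v = queue.popleft()
                    for w in adj[v]:
                        if w in S:
                            continue
                        if w not in color:
                            color[w] = 1 - color[v]
                            queue.append(w)
                        elif color[w] == color[v]:
                            return False   # an odd cycle survives the deletion
            return True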

    Characterization of the newly isolated lytic bacteriophages KTN6 and KT28 and their efficacy against Pseudomonas aeruginosa biofilm

    We here describe two novel lytic phages, KT28 and KTN6, infecting Pseudomonas aeruginosa, isolated from a sewage sample from an irrigated field near Wroclaw, Poland. Both viruses show characteristic features of the Pbunalikevirus genus within the Myoviridae family with respect to the shape and size of head and tail, as well as LPS host receptor recognition. Genome analysis confirmed the similarity to other PB1-related phages, ranging between 48 and 96%. Pseudomonas phage KT28 has a genome size of 66,381 bp and KTN6 of 65,994 bp. The latent period, burst size, stability and host range were determined for both viruses under standard laboratory conditions. Biofilm eradication efficacy was tested in a peg-lid plate assay and on a PET membrane surface. A significant reduction of colony-forming units (70-90%) was observed in 24- to 72-h-old Pseudomonas aeruginosa PAO1 biofilm cultures for both phages. Furthermore, pyocyanin and pyoverdin reduction tests revealed that the tested phages lower the amount of both secreted dyes in 48-72-h-old biofilms. Diffusion and goniometry experiments revealed an increase in the diffusion rate through the biofilm matrix after phage application. These characteristics indicate that these phages could be used to prevent Pseudomonas aeruginosa infections and biofilm formation. It was also shown that PB1-related phage treatment of biofilms caused the emergence of stable phage-resistant mutants growing as small-colony variants.

    A proposed integrated approach for the preclinical evaluation of phage therapy in Pseudomonas infections

    Bacteriophage therapy is currently resurging as a potential complement/alternative to antibiotic treatment. However, preclinical evaluation lacks streamlined approaches. Here we focus on preclinical approaches that have been implemented to assess bacteriophage efficacy against Pseudomonas biofilms and infections. Laser interferometry and profilometry were applied to measure biofilm matrix permeability and surface geometry changes, respectively. These biophysical approaches were combined with an advanced Airway Surface Liquid infection model, which mimics in vitro the normal and CF lung environments, and an in vivo Galleria larvae model. These assays were used to analyze KTN4 (279,593 bp dsDNA genome), a type-IV-pili-dependent giant phage resembling phiKZ. Upon contact, KTN4 immediately disrupts the P. aeruginosa PAO1 biofilm and reduces pyocyanin and siderophore production. The gentamicin exclusion assay on NuLi-1 and CuFi-1 cell lines revealed a decrease in extracellular bacterial load of 4 to 7 logs and showed that the phage successfully prevents wild-type Pseudomonas internalization into CF epithelial cells. These properties, together with the significant rescue of Galleria larvae, indicate that the giant KTN4 phage is a suitable candidate for in vivo phage therapy evaluation for lung infection applications.

    CMS physics technical design report: Addendum on high density QCD with heavy ions
